
    Machine Learning for Forecasting Future Reservations’ Ratings: Radisson Blu Seaside in Helsinki

    In the current age of the internet and big data, it is imperative for hotels to enhance their online reputation to remain competitive and profitable. This research presents a new perspective on how hotels can maintain and improve their online reputation by using machine learning techniques to predict the ratings of reservations. The approach involves analysing the data that customers provide when booking a room. Additionally, the study explores how insights gleaned from online textual reviews can be used by hotel managers to address negative ratings. The study's primary objective is to assess the effectiveness of machine learning in predicting negative instances, a critical factor in managing online reputation. The best-performing models achieved 60% accuracy in classifying negative instances. However, increasing the number of predicted true negative instances also increased the number of false negative instances. This result was primarily due to the unpredictability of customer behaviour, which makes ratings difficult to predict accurately. Despite not achieving the desired result, this study presents a novel direction and suggestions for future research. By utilizing machine learning algorithms to analyse customer data, hotels can better understand their customers' preferences, allowing them to improve their online reputation and, ultimately, their bottom line.
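
    As a rough illustration of the kind of booking-feature classifier the abstract describes, the sketch below trains a scikit-learn model to flag reservations likely to receive a negative rating. The feature names, the synthetic data, and the choice of a random forest are illustrative assumptions, not the study's actual variables or model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Synthetic booking records; every feature here is an assumed placeholder.
    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(0, 365, n),   # booking lead time in days (assumed feature)
        rng.integers(1, 5, n),     # party size (assumed feature)
        rng.integers(1, 10, n),    # length of stay in nights (assumed feature)
        rng.integers(0, 2, n),     # booked via OTA vs. direct (assumed feature)
    ])
    # Weak synthetic signal, mirroring the unpredictability of customer behaviour
    # noted in the abstract; 1 = negative rating.
    y = (0.002 * X[:, 0] - 0.1 * X[:, 3] + rng.normal(0, 1, n) > 1.2).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te), digits=2))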

    Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach

    Traffic speed is central to characterizing the fluidity of the road network. Many transportation applications rely on it, such as real-time navigation, dynamic route planning, and congestion management. Rapid advances in sensing and communication techniques make traffic speed detection easier than ever. However, due to the sparse deployment of static sensors or the low penetration of mobile sensors, the detected speeds are incomplete and far from network-wide use. In addition, sensors are prone to errors or missing data for various reasons, so the speeds they report can be highly noisy. These drawbacks call for effective techniques to recover credible estimates from the incomplete data. In this work, we first identify the issue as a spatiotemporal kriging problem and propose a Laplacian-enhanced low-rank tensor completion (LETC) framework featuring both low-rankness and multi-dimensional correlations for large-scale traffic speed kriging under limited observations. Specifically, three types of speed correlation, namely temporal continuity, temporal periodicity, and spatial proximity, are carefully chosen and simultaneously modeled by three different forms of graph Laplacian: a temporal graph Fourier transform, generalized temporal consistency regularization, and diffusion graph regularization. We then design an efficient solution algorithm using several effective numerical techniques to scale the proposed model up to network-wide kriging. Experiments on two public million-level traffic speed datasets show that the proposed LETC achieves state-of-the-art kriging performance even under low observation rates, while saving more than half the computing time of baseline methods. Some insights into spatiotemporal traffic data modeling and kriging at the network level are provided as well.
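
    The core ingredient of the LETC framework, low-rank completion regularized by a graph Laplacian, can be illustrated on a toy sensor-by-time speed matrix. The sketch below is a simplified matrix (not tensor) stand-in with a single spatial Laplacian and made-up data; it is not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_times, rank = 30, 200, 3

    # Ground truth: speeds that vary smoothly along an assumed chain of sensors.
    s = np.linspace(0, np.pi, n_sensors)
    U_true = np.column_stack([np.sin((k + 1) * s) for k in range(rank)])
    M_true = U_true @ rng.normal(size=(rank, n_times))

    # Graph Laplacian of the sensor chain (spatial proximity).
    A = np.zeros((n_sensors, n_sensors))
    for i in range(n_sensors - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A

    mask = rng.random((n_sensors, n_times)) < 0.2   # only 20% of entries observed
    M_obs = np.where(mask, M_true, 0.0)

    # Gradient descent on ||mask*(UV - M)||^2 + gamma*tr(U^T L U) + lam*(||U||^2 + ||V||^2).
    U = 0.1 * rng.normal(size=(n_sensors, rank))
    V = 0.1 * rng.normal(size=(rank, n_times))
    gamma, lam, lr = 0.5, 0.05, 0.01
    for _ in range(5000):
        R = mask * (U @ V - M_obs)
        gU = R @ V.T + gamma * (L @ U) + lam * U
        gV = U.T @ R + lam * V
        U -= lr * gU
        V -= lr * gV

    rel_err = np.linalg.norm((U @ V - M_true)[~mask]) / np.linalg.norm(M_true[~mask])
    print(f"relative error on unobserved entries: {rel_err:.3f}")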

    Towards better traffic volume estimation: Tackling both underdetermined and non-equilibrium problems via a correlation-adaptive graph convolution network

    Traffic volume is an indispensable ingredient for providing fine-grained information for traffic management and control. However, due to the limited deployment of traffic sensors, obtaining full-scale volume information is far from easy. Existing works on this topic primarily focus on improving the overall estimation accuracy of a particular method and ignore the underlying challenges of volume estimation, and therefore perform poorly on some critical tasks. This paper studies two key problems in traffic volume estimation: (1) underdetermined traffic flows caused by undetected movements, and (2) non-equilibrium traffic flows arising from congestion propagation. We demonstrate a graph-based deep learning method that offers a data-driven, model-free and correlation-adaptive approach to tackle these issues and perform accurate network-wide traffic volume estimation. In particular, to quantify the dynamic and nonlinear relationships between traffic speed and volume for the estimation of underdetermined flows, a speed-pattern-adaptive adjacency matrix based on graph attention is developed and integrated into the graph convolution process to capture non-local correlations between sensors. To measure the impacts of non-equilibrium flows, a temporally masked and clipped attention mechanism combined with a gated temporal convolution layer is customized to capture time-asynchronous correlations between upstream and downstream sensors. We then evaluate our model on a real-world highway traffic volume dataset and compare it with several benchmark models. The proposed model achieves high estimation accuracy even under a 20% sensor coverage rate and significantly outperforms the baselines, especially at underdetermined and non-equilibrium flow locations. Furthermore, comprehensive quantitative model analyses are carried out to justify the model designs.
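
    To make the speed-pattern-adaptive adjacency matrix based on graph attention concrete, the sketch below computes attention scores between the recent speed histories of sensors and uses the resulting dense adjacency in one graph-convolution step. Shapes, dimensions, and the random inputs are assumptions, not the paper's architecture.

    import torch
    import torch.nn.functional as F

    n_sensors, t_window, d = 50, 12, 16            # sensors, speed-history length, hidden size
    speed_hist = torch.randn(n_sensors, t_window)  # recent speed pattern per sensor (fake data)
    node_feat = torch.randn(n_sensors, d)          # current node features (fake data)

    W_q = torch.nn.Linear(t_window, d, bias=False)
    W_k = torch.nn.Linear(t_window, d, bias=False)
    W_g = torch.nn.Linear(d, d)

    # Attention over speed patterns yields a data-adaptive adjacency matrix that
    # can link sensors beyond their physical neighbours.
    q, k = W_q(speed_hist), W_k(speed_hist)
    scores = q @ k.T / d ** 0.5                    # (n_sensors, n_sensors)
    adj = F.softmax(scores, dim=-1)                # each row sums to 1

    # One graph-convolution step using the learned adjacency.
    out = F.relu(W_g(adj @ node_feat))
    print(out.shape)                               # torch.Size([50, 16])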

    3D Shape Knowledge Graph for Cross-domain and Cross-modal 3D Shape Retrieval

    With the development of 3D modeling and fabrication, 3D shape retrieval has become a hot topic. In recent years, several strategies have been put forth to address this retrieval issue. However, they struggle to handle cross-modal 3D shape retrieval because of the natural differences between modalities. In this paper, we propose an innovative concept, geometric words, which serve as basic elements whose combinations can represent any 3D or 2D entity and which allow us to handle cross-domain and cross-modal retrieval problems simultaneously. First, to construct the knowledge graph, we use geometric words as nodes and bridge them using the categories of the 3D shapes as well as the attributes of the geometry. Second, based on the knowledge graph, we provide a unique way to learn each entity's embedding. Finally, we propose an effective similarity measure to handle cross-domain and cross-modal 3D shape retrieval. Specifically, every 3D or 2D entity can locate its geometric words in the 3D knowledge graph, and these serve as a link between cross-domain and cross-modal data. Thus, our approach can achieve cross-domain and cross-modal 3D shape retrieval at the same time. We evaluated the proposed method on the ModelNet40 and ShapeNetCore55 datasets for both the 3D shape retrieval task and the cross-domain 3D shape retrieval task. The classic cross-modal dataset (MI3DOR) is used to evaluate cross-modal 3D shape retrieval. Experimental results and comparisons with state-of-the-art methods illustrate the superiority of our approach.
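
    Once every 2D or 3D entity has an embedding learned from shared geometric-word nodes, the retrieval step reduces to ranking gallery shapes by a similarity measure. The sketch below uses random placeholder embeddings and plain cosine similarity as a stand-in for the paper's learned embeddings and similarity measure.

    import numpy as np

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(1000, 128))   # embeddings of 3D shapes (placeholders)
    query = rng.normal(size=(128,))          # embedding of a 2D image/sketch query (placeholder)

    def cosine_topk(query, gallery, k=5):
        # Rank gallery entries by cosine similarity to the query embedding.
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        q = query / np.linalg.norm(query)
        sims = g @ q
        idx = np.argsort(-sims)[:k]
        return idx, sims[idx]

    idx, sims = cosine_topk(query, gallery)
    print(list(zip(idx.tolist(), np.round(sims, 3).tolist())))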

    Nexus sine qua non: Essentially connected neural networks for spatial-temporal forecasting of multivariate time series

    Modeling and forecasting multivariate time series not only facilitates the decision making of practitioners, but also deepens our scientific understanding of the underlying dynamical systems. Spatial-temporal graph neural networks (STGNNs) have emerged as powerful predictors and have become the de facto models for learning spatiotemporal representations in recent years. However, existing STGNN architectures tend to be complicated, stacking a series of elaborate layers. The resulting models can be redundant or enigmatic, which poses great challenges to their complexity and scalability. Such concerns prompt us to re-examine the designs of modern STGNNs and identify core principles that contribute to a powerful and efficient neural predictor. Here we present a compact predictive model that is fully defined by a dense encoder-decoder and a message-passing layer, powered by node identifications, without any complex sequential modules, e.g., TCNs, RNNs, and Transformers. Empirical results demonstrate that a simple and elegant model with proper inductive biases can compare favorably with elaborately designed state-of-the-art models, while being much more interpretable and computationally efficient for spatial-temporal forecasting problems. We hope our findings open new horizons for future studies revisiting the design of more concise neural forecasting architectures.
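
    The sketch below is a minimal PyTorch rendering of the compact architecture the abstract outlines: a dense encoder, a single message-passing layer over a given adjacency, learnable node-identification embeddings, and a dense decoder. All dimensions, the random adjacency, and the data are assumptions rather than the authors' exact model.

    import torch
    import torch.nn as nn

    class CompactSTGNN(nn.Module):
        def __init__(self, n_nodes, in_len, out_len, hidden=64, id_dim=16):
            super().__init__()
            self.node_id = nn.Parameter(torch.randn(n_nodes, id_dim))  # node identifications
            self.encoder = nn.Sequential(nn.Linear(in_len + id_dim, hidden), nn.ReLU())
            self.mp = nn.Linear(hidden, hidden)                        # message-passing weights
            self.decoder = nn.Linear(hidden, out_len)

        def forward(self, x, adj):
            # x: (batch, n_nodes, in_len); adj: (n_nodes, n_nodes), row-normalized
            h = torch.cat([x, self.node_id.expand(x.size(0), -1, -1)], dim=-1)
            h = self.encoder(h)
            h = torch.relu(self.mp(adj @ h) + h)   # one message-passing step with a skip
            return self.decoder(h)                 # (batch, n_nodes, out_len)

    n_nodes = 20
    adj = torch.softmax(torch.randn(n_nodes, n_nodes), dim=-1)  # placeholder graph
    model = CompactSTGNN(n_nodes, in_len=12, out_len=3)
    y_hat = model(torch.randn(8, n_nodes, 12), adj)
    print(y_hat.shape)                                          # torch.Size([8, 20, 3])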

    Strategies for applying carbon trading to the new energy vehicle market in China: An improved evolutionary game analysis for the bus industry

    Applying carbon trading on the consumption side to subsidize new energy vehicles (NEVs), a scheme called the “carbon trading subsidy” (CTS), is expected to become the successor to the phasing-out purchase subsidy, but there is still a gap between theory and practice in how to apply it. This study applies CTS to the bus industry, addresses the interaction between bus operators' purchase decisions and the government's policy-implementation decisions, and improves evolutionary game theory by considering the incentive effects of strategies within the same group in order to analyse the stable strategies of each party. Simulation experiments are conducted to validate the correctness of the improved model and to investigate the effects of key parameters on the evolution of decision behaviors. The results show that the subsidy effect of different carbon prices varies significantly with the cost gap between new energy buses (NEBs) and fuel buses (FBs), and that there is an optimal carbon price range, 0.163–0.263 CNY/kg in this research. An initial carbon quota slightly below the carbon emissions of an FB achieves the optimal subsidy effect. Maintaining high-frequency inspections of operators can ensure the smooth proliferation of NEBs. Finally, policy recommendations for implementing CTS at different stages of NEB cost reduction are proposed.
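
    The sketch below simulates only the operator side of the game with standard discrete replicator dynamics: bus operators choose between new energy buses (NEBs) and fuel buses (FBs), and selling unused carbon quota at the assumed carbon price tilts payoffs toward NEBs. The payoff numbers, quota, and emission figures are illustrative assumptions, and the paper's within-group incentive extension and the government's strategy are omitted.

    # Pure-Python toy: replicator dynamics for operators choosing NEBs vs. FBs
    # under a carbon-trading subsidy. All numbers below are assumed placeholders.
    carbon_price = 0.2                   # CNY per kg CO2 (assumed; within the reported 0.163-0.263 range)
    quota = 800.0                        # annual carbon quota per bus, kg (assumed)
    emis_neb, emis_fb = 0.0, 900.0       # annual emissions per bus, kg (assumed)
    profit_neb, profit_fb = 95.0, 100.0  # operating profit before carbon revenue (assumed units)

    # Each strategy's payoff adds the value of unused quota sold on the carbon market.
    pay_neb = profit_neb + carbon_price * (quota - emis_neb)
    pay_fb = profit_fb + carbon_price * (quota - emis_fb)

    x = 0.1                              # initial share of operators running NEBs
    for step in range(1, 31):
        avg = x * pay_neb + (1 - x) * pay_fb
        x = x * pay_neb / avg            # discrete replicator update (valid for positive payoffs)
        if step % 5 == 0:
            print(f"step {step:2d}: NEB share = {x:.3f}")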

    Assessment of the impacts of landscape patterns on water quality in Trondheim rivers and Fjord, Norway

    Due to the impacts of hydrological and ecological processes on water quality, discharges from upstream catchments have introduced significant pollution to the recipient waters. This study aims to investigate the possible pollution sources from catchments with different types of land use and landscape patterns, and to develop relationships between water quality and catchment hydro-geological and environmental variables. Data from 10 monitoring sites in Trondheim formed the basis of the case study. Thermotolerant coliform bacteria (TCB) and total phosphorus (TP) were used as the main indicators of water quality in the recipient rivers, streams and Trondheim Fjord. Based on GIS-oriented spatial analysis, 15 hydro-geographical and landscape parameters were selected as explanatory variables. Multiple linear regression (MLR) models were developed at the catchment and river-reach scales to study correlations between the explanatory variables and the response variables, TCB and TP, in the rain and snow seasons. The study showed that the spatial landscape patterns resulted in differences in the concentrations of TCB and TP in the recipients. Agricultural land was shown to be the main pollution source, leading to higher concentrations of TP in streams. Buildings, roads, and other impervious areas increased both TCB and TP. In contrast, forest areas, lakes, river density and steep river slopes were shown to have the capacity to filter incoming P-rich runoff, thus preventing pollutant conveyance and accumulation in the recipients.
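
    A minimal sketch of the catchment-scale regression the study describes: regress a water-quality indicator (here TP) on a few landscape variables with ordinary least squares. The variable names, coefficients, and synthetic data are placeholders that merely mimic the reported directions of effect, not the Trondheim data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 40  # hypothetical number of catchment observations
    df = pd.DataFrame({
        "agri_pct": rng.uniform(0, 60, n),    # % agricultural land (assumed variable)
        "imperv_pct": rng.uniform(0, 40, n),  # % buildings/roads/impervious area (assumed)
        "forest_pct": rng.uniform(0, 80, n),  # % forest cover (assumed)
    })
    # Synthetic TP response: agriculture and impervious area raise TP, forest lowers it.
    tp = (0.02 + 0.004 * df["agri_pct"] + 0.002 * df["imperv_pct"]
          - 0.001 * df["forest_pct"] + rng.normal(0, 0.02, n))

    model = sm.OLS(tp, sm.add_constant(df)).fit()
    print(model.summary())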

    Sparse decomposition based on ADMM dictionary learning for fault feature extraction of rolling element bearing

    Sparse decomposition is a novel method for the fault diagnosis of rolling element bearings, and the quality of the dictionary model directly affects the decomposition results. In order to effectively extract the fault characteristics of rolling element bearings, a sparse decomposition method based on over-complete dictionary learning with the alternating direction method of multipliers (ADMM) is presented in this paper. In the dictionary learning process, ADMM is used to update the atoms of the dictionary. Compared with K-SVD dictionary learning and non-learned dictionary methods, the ADMM-learned dictionary has a better structure and yields faster sparse decomposition. The ADMM dictionary learning method combined with orthogonal matching pursuit (OMP) is used to implement the sparse decomposition of the vibration signal, and the envelope spectrum technique is used to analyze the decomposition results for fault feature extraction of the rolling element bearing. The experimental results show that the ADMM dictionary learning method updates the dictionary atoms to fit the original signal better than K-SVD dictionary learning, effectively suppresses the high-frequency noise in the rolling bearing vibration signal, and highlights the fault characteristic frequency, which is very favorable for the fault diagnosis of rolling element bearings.
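
    The sketch below illustrates the downstream pipeline of the abstract: sparse coding of an impulsive vibration signal with OMP over a dictionary, followed by an envelope spectrum of the reconstruction via the Hilbert transform. A hand-built dictionary of time-shifted impulse-response atoms stands in for the ADMM-learned dictionary, and the signal, sampling rate and fault frequency are assumed values.

    import numpy as np
    from scipy.signal import hilbert
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    fs, n = 12000, 4096
    fault_freq = 100.0                 # assumed fault characteristic frequency, Hz
    t = np.arange(n) / fs

    # Synthetic vibration: 3 kHz resonance bursts repeating at the fault frequency, plus noise.
    sig = np.zeros(n)
    period = int(fs / fault_freq)
    for k in range(0, n, period):
        tt = t[: n - k]
        sig[k:] += np.exp(-800 * tt) * np.sin(2 * np.pi * 3000 * tt)
    sig += 0.3 * rng.normal(size=n)

    # Hand-built dictionary of time-shifted impulse-response atoms (a stand-in for
    # the learned ADMM dictionary).
    shifts = np.arange(0, n, 8)
    D = np.zeros((n, len(shifts)))
    for j, s0 in enumerate(shifts):
        tt = t[: n - s0]
        D[s0:, j] = np.exp(-800 * tt) * np.sin(2 * np.pi * 3000 * tt)
    D /= np.linalg.norm(D, axis=0)

    # Sparse decomposition with OMP, then the envelope spectrum of the reconstruction.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=40, fit_intercept=False).fit(D, sig)
    recon = D @ omp.coef_
    envelope = np.abs(hilbert(recon))
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    print(f"dominant envelope frequency: {freqs[np.argmax(spec)]:.1f} Hz "
          f"(fault frequency: {fault_freq:.1f} Hz)")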